Upcoming Event: PhD Dissertation Defense
Yifan Zhang
1 – 3 PM
Tuesday, January 27, 2026
POB 4.304
Tensors, the high-dimensional generalization of matrices, are the natural data structure for modeling high-order correlations and interactions. Despite their prevalence in recent developments, ranging from data science to quantum computing, working with tensors is extremely expensive in both storage and computation because of their sheer scale. The goal of this dissertation is to develop theories and efficient algorithms that mitigate this curse of dimensionality.
We focus on three primary strategies: randomized dimension reduction (sketching), implicit algorithms that avoid explicit tensor formation, and parallel/distributed computing.
The randomized approach accelerates the computation of low-rank approximations by projecting the decomposition problem onto a significantly lower-dimensional subspace. To guarantee accuracy, one must determine how aggressive this projection can be while still preserving the geometry of the set of all low-rank tensors. This dissertation develops two widely applicable theories for such problems, based on covering numbers and norming sets. Beyond tensor decompositions, these theories yield new results in generalized polynomial optimization and generalization error bounds for deep neural networks.
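As a point of reference, the matrix case of this idea is the classical randomized range finder: sketch the input with a random matrix, then solve a small projected problem. The minimal NumPy sketch below is illustrative only (the function name and oversampling parameter are ours); the dissertation's theory concerns the far subtler tensor setting.

    import numpy as np

    def randomized_low_rank(A, r, oversample=10):
        # Gaussian sketch: compress the column space of A to r + oversample dims.
        m, n = A.shape
        G = np.random.randn(n, r + oversample)
        Y = A @ G                        # sketch of range(A)
        Q, _ = np.linalg.qr(Y)           # orthonormal basis for the sketch
        B = Q.T @ A                      # small projected problem
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U)[:, :r], s[:r], Vt[:r]

    # Example: a 2000 x 1500 matrix of exact rank 20 is recovered to
    # near machine precision from a 30-dimensional sketch.
    A = np.random.randn(2000, 20) @ np.random.randn(20, 1500)
    U, s, Vt = randomized_low_rank(A, 20)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))

The geometric question in the tensor setting is exactly how small the sketch dimension can be before such a projection stops preserving the set of low-rank tensors.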
Regarding implicit algorithms, this dissertation proposes a new method for learning conditionally independent nonparametric mixture models using the method of moments. The approach efficiently and robustly estimates statistics of each component, such as the mean, variance, and moment generating function. We also establish theoretical guarantees, including identifiability and local convergence. Numerical experiments, including class averaging of X-ray Free Electron Laser (XFEL) data, confirm its advantages over classical methods such as Expectation-Maximization (EM).
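The starting point of any method of moments is the identity that each moment of a mixture is the weighted sum of the corresponding component moments; the estimator inverts these equations to recover component statistics from data. Below is a small sanity check of that identity for a two-component Gaussian mixture (a standard textbook setting, not the dissertation's nonparametric algorithm).

    import numpy as np

    rng = np.random.default_rng(0)
    # Illustrative mixture: weights w, means mu, standard deviations sd.
    w  = np.array([0.3, 0.7])
    mu = np.array([-2.0, 1.0])
    sd = np.array([0.5, 1.0])
    z = rng.choice(2, size=100_000, p=w)      # latent component labels
    x = rng.normal(mu[z], sd[z])              # observed mixture samples

    # Empirical moments E[X^k] vs. weighted sums of component moments.
    emp = [np.mean(x**k) for k in (1, 2, 3)]
    pop = [w @ mu,
           w @ (mu**2 + sd**2),
           w @ (mu**3 + 3 * mu * sd**2)]
    print(np.round(emp, 3))   # close to...
    print(np.round(pop, 3))   # ...the population moments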
From a parallel computing perspective, the dissertation establishes a communication lower bound for distributed bilinear algorithms (e.g., matrix multiplication, tensor contraction, and convolution). Bilinear algorithms are naturally encoded by tensors, and the derived lower bound rests on the expansion properties of the underlying tensor. The bound applies to all bilinear algorithms and yields optimal or near-optimal results for many classical problems.
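To make the encoding concrete: matrix multiplication is a bilinear map, so it is described by an order-3 tensor T with vec(C) = T contracted against vec(A) and vec(B). The snippet below (a standard construction, not specific to the dissertation) builds this tensor for n = 2 and verifies it; the communication lower bound is then stated in terms of expansion properties of tensors like T.

    import numpy as np

    n = 2
    # T[p, q, r] = 1 iff p = (i,k), q = (k,j), r = (i,j) in row-major order,
    # so that vec(C)[r] = sum_{p,q} T[p,q,r] vec(A)[p] vec(B)[q].
    T = np.zeros((n*n, n*n, n*n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                T[i*n + k, k*n + j, i*n + j] = 1.0

    A, B = np.random.randn(n, n), np.random.randn(n, n)
    C = np.einsum('pqr,p,q->r', T, A.ravel(), B.ravel()).reshape(n, n)
    print(np.allclose(C, A @ B))   # True: T encodes matrix multiplication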
Finally, the dissertation introduces novel, efficient algorithms for tensor interpolative decompositions and online (streaming) CP decompositions, leveraging the techniques developed throughout this work.
Yifan Zhang is a Ph.D. candidate in the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin, advised by Dr. Joe Kileel. He received B.S. degrees in Applied Mathematics, Statistics, and Engineering Physics, with minors in CS and CSE, from the University of Illinois at Urbana-Champaign in 2020. His current research interests include randomized numerical algorithms, efficient tensor decompositions and related applications, and numerical optimization.